Osteoporosis is a common disease that increases fracture risk. Hip fractures, especially in the elderly, lead to increased morbidity, decreased quality of life, and increased mortality. Because osteoporosis is a silent disease before fracture, it frequently remains undiagnosed and untreated. Areal bone mineral density (aBMD), assessed by dual-energy X-ray absorptiometry (DXA), is the gold-standard method for osteoporosis diagnosis and is therefore also used for prognostic prediction of future fractures. However, the required special-purpose equipment is not broadly available everywhere, in particular not to patients in developing countries. We propose a deep learning classification model (FORM) that predicts hip fracture risk directly from plain radiographs (X-rays) or 2D projection images of computed tomography (CT) data. Our method is fully automated and therefore well suited for opportunistic screening settings, identifying high-risk patients in a broader population without additional screening effort. The model was trained and evaluated on X-rays and CT projections from the Osteoporotic Fractures in Men (MrOS) study: 3108 X-rays (89 incident hip fractures) and 2150 CTs (80 incident hip fractures) with an 80/20 split. We show that FORM can correctly predict the 10-year hip fracture risk with validation AUCs of 81.44 ± 3.11% / 81.04 ± 5.54% (mean ± STD) when including additional information such as age, BMI, fall history, and health background, under 5-fold cross-validation on the X-ray and CT cohorts, respectively. Our approach significantly (p < 0.01) outperforms previous methods such as FRAX (70.19 ± 6.58) and aBMD (74.72 ± 7.21) on the X-ray cohort, and our model outperforms both hip-aBMD-based predictions. We are confident that FORM can contribute to improving the diagnosis of osteoporosis at an early stage.
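The abstract reports validation AUCs for 10-year fracture risk. As a reminder of what that metric measures, here is a minimal sketch of AUC via the rank (Mann-Whitney) formulation: the probability that a randomly chosen fracture case receives a higher risk score than a randomly chosen non-fracture case. The scores and labels below are toy values for illustration only, not data from the MrOS study.

```python
def auc(scores, labels):
    """Area under the ROC curve, computed as the fraction of
    positive/negative pairs ranked correctly (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy 10-year risk scores: fracture cases (label 1) tend to score higher.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0]
print(auc(scores, labels))  # 8 of 9 pairs ranked correctly
```

An AUC of 0.5 corresponds to random ranking, 1.0 to perfect separation of fracture from non-fracture cases.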
High-quality data is a key aspect of modern machine learning. However, labels produced by humans suffer from issues such as label noise and class ambiguity. We pose the question of whether hard labels are sufficient to represent the underlying ground-truth distribution in the presence of these inherent imprecisions. We therefore compare learning with hard and soft labels quantitatively and qualitatively on synthetic and real-world datasets. We show that the application of soft labels improves performance and yields a more regular structure of the internal feature space.
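The hard-versus-soft-label distinction can be made concrete with the cross-entropy objective. The following minimal sketch (illustrative numbers, not from the paper) shows that under a soft target reflecting annotator disagreement, a prediction that hedges between the plausible classes scores better than an overconfident one, while a one-hot hard target rewards overconfidence:

```python
import math

def cross_entropy(target, predicted, eps=1e-12):
    """Cross-entropy H(target, predicted) between two class distributions."""
    return -sum(t * math.log(p + eps) for t, p in zip(target, predicted))

# A 3-class example where annotators disagree between classes 0 and 1.
hard_label = [1.0, 0.0, 0.0]   # one-hot: all mass on the majority class
soft_label = [0.6, 0.4, 0.0]   # empirical annotator distribution

hedged = [0.55, 0.40, 0.05]         # model that reflects the ambiguity
overconfident = [0.95, 0.03, 0.02]  # model that is sure of the majority class

# Under the soft target the hedged prediction wins; under the hard
# target the overconfident one does.
print(cross_entropy(soft_label, hedged) < cross_entropy(soft_label, overconfident))
print(cross_entropy(hard_label, overconfident) < cross_entropy(hard_label, hedged))
```

Both comparisons print True, which is one way to see why training on soft labels discourages the overconfidence the authors observe with hard labels.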
High-quality data is essential for modern machine learning. However, acquiring such data is difficult due to noisy and ambiguous annotations by humans. Aggregating such annotations to determine an image's label leads to lower data quality. We propose a data-centric image classification benchmark with nine real-world datasets and multiple annotations per image, in order to investigate and quantify the impact of such data-quality issues. We focus on a data-centric perspective by asking how data quality can be improved. In thousands of experiments, we show that multiple annotations allow a better approximation of the real underlying class distribution. We find that hard labels cannot capture the ambiguity of the data, which can lead to the common problem of overconfident models. Based on the presented datasets, benchmark baselines, and analysis, we open up multiple research opportunities for the future.
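The aggregation step the abstract refers to can be sketched in a few lines. This toy example (hypothetical class names, not from the benchmark) contrasts the usual majority-vote hard label with the empirical distribution over annotator votes, which is what multiple annotations per image make available:

```python
from collections import Counter

def aggregate(annotations, classes):
    """Turn repeated annotator votes for one image into a hard label
    (majority vote) and a soft label (empirical class distribution)."""
    counts = Counter(annotations)
    hard = max(classes, key=lambda c: counts[c])
    soft = [counts[c] / len(annotations) for c in classes]
    return hard, soft

# Ten annotators label the same image; the class is genuinely ambiguous.
votes = ["cat"] * 6 + ["lynx"] * 4
hard, soft = aggregate(votes, ["cat", "lynx", "dog"])
print(hard)   # "cat": the majority vote hides the disagreement
print(soft)   # [0.6, 0.4, 0.0]: the distribution preserves it
```

Collapsing the votes to the hard label discards exactly the ambiguity signal that, per the abstract, separates well-calibrated from overconfident models.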
Consistently high data quality is essential for the development of novel loss functions and architectures in the field of deep learning. The existence of such data and labels is usually presumed, whereas acquiring high-quality datasets remains a major issue in many cases. In real-world datasets we often encounter ambiguous labels due to subjective annotations. In our data-centric approach, we propose a method to relabel such ambiguous labels instead of implementing their handling inside a neural network. By definition, hard classifications are insufficient to capture the real-world ambiguity of the data. We therefore propose the method "Data-Centric Classification and Clustering (DC3)", which combines semi-supervised classification and clustering. It automatically estimates the ambiguity of an image and, depending on that ambiguity, performs a classification or a clustering. DC3 is general in nature, so it can be used in addition to many semi-supervised learning (SSL) algorithms. On average, this results in a 7.6% higher F1-score for classifications and a 7.9% lower inner distance of clusters across multiple evaluated SSL algorithms and datasets. Most importantly, we give a proof-of-concept that the classifications and clusterings produced by DC3 are useful as proposals for the manual refinement of such ambiguous labels. Overall, the combination of SSL with our method DC3 allows for better handling of ambiguous labels during the annotation process.
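The core routing idea (classify confident samples, cluster ambiguous ones) can be illustrated with a minimal sketch. The entropy-based rule and the threshold value below are illustrative assumptions of this sketch, not DC3's actual ambiguity estimator:

```python
import math

def entropy(probs, eps=1e-12):
    """Shannon entropy of a predicted class distribution (nats)."""
    return -sum(p * math.log(p + eps) for p in probs if p > 0)

def route(probs, threshold=0.5):
    """Route a sample: classify it when the predicted distribution is
    confident (low entropy), otherwise hand it to the clustering branch.
    The threshold here is an illustrative choice, not taken from DC3."""
    return "classify" if entropy(probs) < threshold else "cluster"

print(route([0.95, 0.03, 0.02]))  # confident prediction -> "classify"
print(route([0.40, 0.35, 0.25]))  # ambiguous prediction -> "cluster"
```

Treating ambiguous samples as cluster members rather than forcing a hard class on them is what makes the outputs usable as relabeling proposals.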
The success of neural networks builds to a large extent on their ability to create internal knowledge representations from real-world high-dimensional data, such as images, sound, or text. Approaches to extract and present these representations, in order to explain the neural network's decisions, form an active and multifaceted research field. To gain a deeper understanding of a central aspect of this field, we have performed a targeted review focusing on research that aims to associate internal representations with human-understandable concepts. In doing this, we added a perspective on the existing research by using primarily deductive-nomological explanations as a proposed taxonomy. We find this taxonomy, together with theories of causality, useful for understanding what can and cannot be expected from neural network explanations. The analysis additionally uncovers an ambiguity in the reviewed literature related to the goal of model explainability: is it understanding the ML model, or is it actionable explanations useful in the deployment domain?
Human motion prediction is a complex task, as it involves forecasting variables over time on a graph of connected sensors. This is especially true in the case of few-shot learning, where we strive to forecast motion sequences for previously unseen actions based on only a few examples. Despite this, almost all related approaches for few-shot motion prediction do not incorporate the underlying graph, even though it is a common component in classical motion prediction. Furthermore, state-of-the-art methods for few-shot motion prediction are restricted to motion tasks with a fixed output space, meaning these tasks are all limited to the same sensor graph. In this work, we propose to extend recent work on few-shot time-series forecasting with heterogeneous attributes using graph neural networks, introducing the first few-shot motion approach that explicitly incorporates the spatial graph while also generalizing across motion tasks with heterogeneous sensors. In our experiments on motion tasks with heterogeneous sensors, we demonstrate significant performance improvements, with lifts from 10.4% up to 39.3% compared to the best state-of-the-art models. Moreover, we show that our model can perform on par with the best approach so far when evaluated on tasks with a fixed output space while using two orders of magnitude fewer parameters.
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
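Patch-based training, the most common strategy reported for oversized samples, amounts to tiling each large image into fixed-size crops that fit into memory. A minimal sketch of that tiling step (toy dimensions, any real pipeline would add padding and batching):

```python
def extract_patches(image, patch_size, stride):
    """Tile a 2D image (given as a list of rows) into square patches,
    as done when a sample is too large to be processed at once."""
    h, w = len(image), len(image[0])
    patches = []
    for top in range(0, h - patch_size + 1, stride):
        for left in range(0, w - patch_size + 1, stride):
            patches.append([row[left:left + patch_size]
                            for row in image[top:top + patch_size]])
    return patches

# A 4x4 "image" split into four non-overlapping 2x2 patches.
image = [[r * 4 + c for c in range(4)] for r in range(4)]
patches = extract_patches(image, patch_size=2, stride=2)
print(len(patches))   # 4
print(patches[0])     # [[0, 1], [4, 5]]
```

Choosing a stride smaller than the patch size yields overlapping patches, trading compute for smoother predictions at patch borders.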
This volume contains revised versions of the papers selected for the third volume of the Online Handbook of Argumentation for AI (OHAAI). Previously, formal theories of argument and argument interaction have been proposed and studied, and this has led to the more recent study of computational models of argument. Argumentation, as a field within artificial intelligence (AI), is highly relevant for researchers interested in symbolic representations of knowledge and defeasible reasoning. The purpose of this handbook is to provide an open access and curated anthology for the argumentation research community. OHAAI is designed to serve as a research hub to keep track of the latest and upcoming PhD-driven research on the theory and application of argumentation in all areas related to AI.
Time series, sets of sequences in chronological order, are essential data in statistical research with many forecasting applications. Although recent performance of many Transformer-based models has been noticeable, long multi-horizon time series forecasting remains a very challenging task. Going beyond transformers in sequence translation and transduction research, we observe the effects of down-and-up sampling, which can nudge temporal saliency patterns to emerge in time sequences. Motivated by this observation, in this paper we propose a novel architecture, Temporal Saliency Detection (TSD), on top of the attention mechanism and apply it to multi-horizon time series prediction. We renovate the traditional encoder-decoder architecture by recasting it as a series of deep convolutional blocks that work in tandem with multi-head self-attention. The proposed TSD approach facilitates the multiresolution of saliency patterns upon condensed multi-heads, thus progressively enhancing complex time series forecasting. Experimental results illustrate that our proposed approach significantly outperforms existing state-of-the-art methods across multiple standard benchmark datasets in many far-horizon forecasting settings. Overall, TSD achieves 31% and 46% relative improvement over the current state-of-the-art models in multivariate and univariate time series forecasting scenarios on standard benchmarks. The Git repository is available at https://github.com/duongtrung/time-series-temporal-saliency-patterns.
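The down-and-up sampling effect the abstract points to can be seen with a toy example. This sketch (average-pool downsampling and nearest-neighbour upsampling, chosen here only for illustration and not taken from the TSD architecture) shows how the round trip acts as a coarse-graining pass: fine-grained noise is averaged away while the broad, salient trend survives.

```python
def downsample(seq, factor):
    """Average-pool a sequence by `factor` (length divisible by factor)."""
    return [sum(seq[i:i + factor]) / factor for i in range(0, len(seq), factor)]

def upsample(seq, factor):
    """Nearest-neighbour upsampling back to the original length."""
    return [v for v in seq for _ in range(factor)]

# High-frequency jitter around two distinct levels (a step pattern).
series = [1, 9, 1, 9, 50, 58, 50, 58]
coarse = upsample(downsample(series, 2), 2)
print(coarse)  # the jitter is gone, the step between levels remains
```

It is this kind of emphasized coarse structure that the saliency-detection blocks in TSD are designed to exploit at multiple resolutions.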
Hyperspectral Imaging (HSI) provides detailed spectral information and has been utilised in many real-world applications. This work introduces an HSI dataset of building facades in a light industry environment with the aim of classifying different building materials in a scene. The dataset is called the Light Industrial Building HSI (LIB-HSI) dataset. This dataset consists of nine categories and 44 classes. In this study, we investigated deep learning based semantic segmentation algorithms on RGB and hyperspectral images to classify various building materials, such as timber, brick and concrete.